
feat(benchmark): Add scenario for expressions with Set node #10647

Merged: 1 commit into master from cat-89-add-benchmark-scenario-for-expressions on Sep 3, 2024

Conversation

@tomi (Contributor) commented Sep 3, 2024

Summary

Add scenario for expressions with Set node
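The PR summary doesn't reproduce the workflow inline; as a rough illustration, a scenario like this centers on a Set node whose output fields are all computed via expressions. Below is a minimal sketch of such a node definition, assuming the parameter shape of the current Set node (the field names and values are illustrative, not taken from this PR):

```typescript
// Hypothetical sketch of a Set node driven entirely by expressions.
// A leading '=' marks a parameter value as an n8n expression.
const setNodeWithExpressions = {
  name: 'Set with expressions',
  type: 'n8n-nodes-base.set',
  typeVersion: 3.4, // assumed: a recent Set node version
  parameters: {
    mode: 'manual',
    assignments: {
      assignments: [
        { id: '1', name: 'fullName', type: 'string', value: '={{ $json.firstName + " " + $json.lastName }}' },
        { id: '2', name: 'isAdult', type: 'boolean', value: '={{ $json.age >= 18 }}' },
        { id: '3', name: 'tagCount', type: 'number', value: '={{ $json.tags.length }}' },
      ],
    },
  },
};
```

Each assignment forces the expression engine to resolve a value against the incoming item, so pushing many items through this node measures expression-evaluation throughput end to end.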

Related Linear tickets, GitHub issues, and Community forum posts

CAT-89

Review / Merge checklist

  • PR title and summary are descriptive. (conventions)
  • Docs updated or follow-up ticket created.
  • Tests included.
  • PR Labeled with release/backport (if the PR is an urgent fix that needs to be backported)

@n8n-assistant bot added the "n8n team" label (Authored by the n8n team) on Sep 3, 2024
@netroy (Member) left a comment

Instead of the Set node, can we use something that's a bit more I/O bound?
From a benchmarking perspective, I find the Set node too simplistic to provide any meaningful data that a benchmark without this node would not.

@tomi (Contributor, Author) commented Sep 3, 2024

> Instead of the Set node, can we use something that's a bit more I/O bound? From a benchmarking perspective, I find the Set node too simplistic to provide any meaningful data that a benchmark without this node would not.

Any ideas what that might be? The idea is to test how fast we can evaluate a bunch of expressions. I'm sure there will be a bigger overhead here once expression evaluation becomes async with the introduction of agents.
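For context, n8n expressions are the `{{ ... }}` snippets embedded in node parameters and resolved per item at execution time. A few representative examples of the kind of expressions such a scenario could exercise (illustrative only, not taken from this PR's workflow):

```typescript
// Illustrative n8n-style expressions, as stored in node parameters.
const exampleExpressions = [
  '={{ $json.email.toLowerCase() }}', // method call on item data
  '={{ $json.items.map(i => i.price).reduce((a, b) => a + b, 0) }}', // inline JS over arrays
  '={{ $("Webhook").first().json.userId }}', // reference to another node's output
];
```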

@netroy (Member) commented Sep 3, 2024

> The idea is to test how fast we can evaluate a bunch of expressions.

IMO stuff like this is important, but we need to benchmark it in isolation.
Maybe we need benchmarking with unit tests.
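A minimal sketch of what benchmarking expression evaluation in isolation could look like, e.g. as a unit-level micro-benchmark. Note that `evaluateExpression` below is a hypothetical stub standing in for n8n's actual expression resolver:

```typescript
import { performance } from 'node:perf_hooks';

// Hypothetical stand-in for n8n's expression resolver; a real isolated
// benchmark would import and call the actual evaluation entry point instead.
function evaluateExpression(expr: string, data: Record<string, unknown>): unknown {
  const match = /\$json\.(\w+)/.exec(expr); // stub: resolves '={{ $json.field }}'
  return match ? data[match[1]] : expr;
}

function benchExpression(expr: string, data: Record<string, unknown>, iterations = 100_000): number {
  // Warm-up so JIT compilation doesn't skew the measured loop.
  for (let i = 0; i < 1_000; i++) evaluateExpression(expr, data);

  const start = performance.now();
  for (let i = 0; i < iterations; i++) evaluateExpression(expr, data);
  const elapsedMs = performance.now() - start;

  return (iterations / elapsedMs) * 1000; // evaluations per second
}

console.log(`${benchExpression('={{ $json.age >= 18 }}', { age: 30 }).toFixed(0)} ops/sec`);
```

Tracking a number like this per commit would isolate expression-engine regressions from the rest of the workflow-execution pipeline.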

cypress bot commented Sep 3, 2024

n8n Run #6721

Run Properties:
  • Status: Passed
  • Project: n8n
  • Branch: cat-89-add-benchmark-scenario-for-expressions
  • Run duration: 04m 48s
  • Commit: 9130293600 (browsers: node18.12.0-chrome107, author: tomi, specs: e2e/*)
  • Committer: Tomi Turtiainen

Test results:
  • Failing: 0
  • Flaky: 1
  • Pending (skipped with .skip): 0
  • Skipped (mocha hook failure): 0
  • Passing: 423

github-actions bot commented Sep 3, 2024

✅ All Cypress E2E specs passed

@tomi (Contributor, Author) commented Sep 3, 2024

> > The idea is to test how fast we can evaluate a bunch of expressions.
>
> IMO stuff like this is important, but we need to benchmark it in isolation. Maybe we need benchmarking with unit tests.

Fair point. We are not really interested in raw numbers, but rather in how performance changes over time. So having at least one scenario that includes expressions is nice to have, even though it might not be the ideal way to test this. We can always revisit if needed.

@tomi merged commit 35e6a87 into master on Sep 3, 2024
40 checks passed
@tomi deleted the cat-89-add-benchmark-scenario-for-expressions branch on Sep 3, 2024 at 14:09
MiloradFilipovic added a commit that referenced this pull request Sep 5, 2024
* master:
  refactor(RabbitMQ Trigger Node): Improve type-safety, add tests, and fix issues with manual triggers (#10663)
  feat(editor): Add support for nodes with multiple main inputs in new canvas (no-changelog) (#10659)
  fix(editor): Set minimum zoom to 0 to allow fitting very large workflows in new canvas (no-changelog) (#10666)
  feat(editor): Change selection to be default canvas behaviour (no-changelog) (#10668)
  feat: More hints to nodes  (#10565)
  fix(editor): Fix opening executions tab from a new, unsaved workflow (#10652)
  fix(AI Agent Node): Fix tools agent when using memory and Anthropic models (#10513)
  feat(editor): Make highlighted data pane floating (#10638)
  fix(editor): Fix workflow loading after switching to executions view in new canvas (no-changelog) (#10655)
  refactor(benchmark): Separate cloud env provisioning from running benchmarks (#10657)
  feat(core): Implement wrapping of regular nodes as AI Tools (#10641)
  refactor(editor): Remove Trial logic in personalization modal and port to script setup (#10649)
  fix(core): Declutter webhook insertion errors (#10650)
  feat: Reintroduce collaboration feature (#10602)
  feat(benchmark): Add scenario for expressions with Set node (#10647)
  feat(benchmark): Add benchmark scenario for binary files (#10648)
  build: Add `reset` script (#10627)
  feat(editor): Overhaul node insert position computation in new canvas (no-changelog) (#10637)
@janober (Member) commented Sep 5, 2024

Got released with n8n@1.58.0

Labels: n8n team (Authored by the n8n team), Released

4 participants